
    Spherical Light Field Capture with Rotating Camera Array

    A light-field capture apparatus is provided that includes a camera housing having a plurality of cameras, each camera providing a different viewpoint of a scene. The apparatus may also include a drop-string mechanism coupled to a top portion of the camera housing and a rotary table pivotably coupled to a portion of the camera housing different from the top portion. The drop-string mechanism includes a string having a first end coupled to the top portion of the camera housing and a second end wound around a center column. The rotary table is rotatable around the center column to change the horizontal position of the camera housing. Additionally, rotating the rotary table causes the string of the drop-string mechanism to unwind from the center column in response to changes in the horizontal position of the camera housing, thereby causing a vertical drop in the position of the camera housing.
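The mechanism described above couples rotation to descent: turning the table by an angle unwinds a length of string proportional to the column's radius, so the housing traces a helix around the column. A minimal sketch of that geometry, with hypothetical dimensions (the patent text gives none):

```python
import math

def housing_position(theta, table_radius=1.0, column_radius=0.05, z_start=2.0):
    """Camera-housing position after the rotary table turns by `theta` radians.

    All parameters are illustrative assumptions, not values from the patent:
      table_radius  -- horizontal distance from the center column to the housing
      column_radius -- radius of the center column the string is wound around
      z_start       -- initial height of the housing

    Rotating by theta unwinds string of length column_radius * theta, lowering
    the housing by that amount while it sweeps a circle, so the camera
    viewpoints lie on a helix covering the sphere of directions.
    """
    x = table_radius * math.cos(theta)
    y = table_radius * math.sin(theta)
    z = z_start - column_radius * theta
    return (x, y, z)
```

One full turn (theta = 2*pi) therefore lowers the housing by 2*pi*column_radius, the circumference of the column, which sets the vertical spacing between successive rings of viewpoints.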

    Circularly polarized spherical illumination reflectometry


    NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination

    We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of an object illuminated by one unknown lighting condition. This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models for free-viewpoint relighting in this challenging and underconstrained capture setup for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
    Comment: Camera-ready version for SIGGRAPH Asia 2021. Project page: https://people.csail.mit.edu/xiuming/projects/nerfactor
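The factorization described in the abstract can be illustrated with a toy Lambertian re-rendering step: per-point albedo, normals, and light visibility are combined with an environment light to reproduce the observed pixels, and the photometric error drives the optimization. This is a minimal NumPy sketch of that idea only; all names and shapes are illustrative assumptions, not the paper's actual implementation (which uses learned BRDFs and neural fields):

```python
import numpy as np

def rerender(albedo, normals, visibility, env_light, light_dirs):
    """Toy Lambertian re-rendering in the spirit of NeRFactor's factorization.

    albedo:     (P, 3)  per-surface-point RGB albedo
    normals:    (P, 3)  unit surface normals
    visibility: (P, L)  per-point visibility of each light direction (0..1)
    env_light:  (L, 3)  RGB radiance of each sampled environment light direction
    light_dirs: (L, 3)  unit directions toward each light sample
    """
    cos = np.clip(normals @ light_dirs.T, 0.0, None)  # (P, L) foreshortening
    shading = (visibility * cos) @ env_light          # (P, 3) shadowed irradiance
    return albedo / np.pi * shading                   # Lambertian BRDF = albedo / pi

def rerendering_loss(albedo, normals, visibility, env_light, light_dirs, observed_rgb):
    """Mean squared photometric error between re-rendered and observed colors."""
    rendered = rerender(albedo, normals, visibility, env_light, light_dirs)
    return float(np.mean((rendered - observed_rgb) ** 2))
```

Because visibility multiplies the shading term explicitly, a point in shadow can match a dark observation without darkening its albedo, which is the mechanism the abstract credits for separating shadows from albedo.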